
    The Parity Argument for Extended Consciousness

    Andy Clark and David Chalmers (1998) argue that certain mental states and processes can be partially constituted by objects located beyond one’s brain and body: this is their extended mind thesis (EM). But they maintain that consciousness relies on processing that is too high in speed and bandwidth to be realized outside the body (see Chalmers, 2008, and Clark, 2009). I evaluate Clark and Chalmers’ reason for denying that consciousness extends while still supporting unconscious state extension. I argue that their reason is not well grounded and does not hold up against foreseeable advances in technology. I conclude that their current position needs re-evaluation. If their original parity argument works as a defence of EM, they have yet to identify a good reason why it does not also work as a defence of extended consciousness. I end by advancing a parity argument for extended consciousness and considering some possible replies.

    AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI

    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders in a continuum between fully-externalized systems, loosely coupled with humans, and fully-internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications. We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.

    Extended mathematical cognition: external representations with non-derived content

    Funder: Social Sciences and Humanities Research Council of Canada (doi: http://dx.doi.org/10.13039/501100000155)
    Vehicle externalism maintains that the vehicles of our mental representations can be located outside of the head, that is, they need not be instantiated by neurons located inside the brain of the cogniser. But some disagree, insisting that ‘non-derived’, or ‘original’, content is the mark of the cognitive and that only biologically instantiated representational vehicles can have non-derived content, while the contents of all extra-neural representational vehicles are derived and thus lie outside the scope of the cognitive. In this paper we develop one aspect of Menary’s vehicle externalist theory of cognitive integration—the process of enculturation—to respond to this longstanding objection. We offer examples of how expert mathematicians introduce new symbols to represent new mathematical possibilities that are not yet understood, and we argue that these new symbols have genuine non-derived content, that is, content that is not dependent on an act of interpretation by a cognitive agent and that does not derive from conventional associations, as many linguistic representations do.

    Overcoming deadlock: Scientific and ethical reasons to embrace the extended mind thesis

    The extended mind thesis maintains that while minds may be centrally located in one’s brain-and-body, they are sometimes partly constituted by tools in our environment. Critics argue that we have no reason to move from the claim that cognition is embedded in the environment to the stronger claim that cognition can be constituted by the environment. I will argue that there are normative reasons, both scientific and ethical, for preferring the extended account of the mind to the rival embedded account.
    Leverhulme Centre for the Future of Intelligence, Leverhulme Trust, under Grant RC-2015-067

    AI Extenders and the Ethics of Mental Health

    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this chapter we suggest that using AI extenders, i.e., tightly coupled cognitive extenders that are imbued with machine learning and other ‘artificially intelligent’ tools, presents both new ethical challenges and opportunities for mental health. We focus on several mental health conditions that may develop differently with the use of AI extenders by people with cognitive disorders, and then discuss some of the related opportunities and challenges.

    Responsible AI – Two Frameworks for Ethical Design Practice

    In 2019, the IEEE launched the P7000 standards projects intended to address ethical issues in the design of autonomous and intelligent systems. This move came amidst a growing public concern over the unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice. However, the difficulty in moving from principles to practice presents a significant challenge to the implementation of ethical guidelines. Herein, we describe two complementary frameworks for integrating ethical analysis into engineering practice to help address this challenge. We then provide the outcomes of an ethical analysis informed by these frameworks, conducted within the specific context of internet-delivered therapy in digital mental health. We hope both the frameworks and analysis can provide tools and insights, not only for the context of digital healthcare, but for data-enabled and intelligent technology development more broadly.

    AI Extenders

    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders in a continuum between fully-externalized systems, loosely coupled with humans, and fully-internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications. We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.
    Leverhulme Centre for the Future of Intelligence, Leverhulme Trust, under Grant RC-2015-06

    Supporting human autonomy in AI systems

    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model (“METUX”) that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: 1) There are autonomy-related consequences to algorithms representing the interests of third parties, and they are not impartial and rational extensions of the self, as is often perceived; 2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and 3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.

    How does Artificial Intelligence Pose an Existential Risk?

    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential threat to humanity: the control problem, the possibility of global disruption from an AI race dynamic, and the weaponization of AI.